64 research outputs found

    MultiLink Analysis: Brain Network Comparison via Sparse Connectivity Analysis

    Get PDF
    Abstract The analysis of the brain from a connectivity perspective is revealing novel insights into brain structure and function. Discovery is, however, hindered by the lack of prior knowledge available for forming hypotheses, and exploratory data analysis is made complex by the high dimensionality of the data: to assess the effect of pathological states on brain networks, neuroscientists are often required to evaluate experimental effects in case-control studies with hundreds of thousands of connections. In this paper, we propose an approach to identify the multivariate relationships in brain connections that characterize two distinct groups, permitting investigators to immediately discover the subnetworks that carry information about the differences between experimental groups. In particular, we are interested in data discovery for connectomics, where the goal is to find the connections that characterize differences between two groups of subjects. Notably, these connections need not maximize classification accuracy, since high accuracy alone does not guarantee a reliable interpretation of the specific differences between groups. In practice, our method exploits recent machine learning techniques that employ sparsity to deal with weighted networks describing whole-brain macro connectivity. We evaluated our technique on functional and structural connectomes from human and murine brain data. In our experiments, we automatically identified disease-relevant connections in datasets with supervised and unsupervised anatomy-driven parcellation approaches, and on high-dimensional datasets.
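A minimal sketch of the sparsity mechanism this abstract alludes to (the function names and the plain soft-thresholding step are illustrative assumptions, not the authors' implementation): an l1-style shrinkage drives most connection weights to exactly zero, so the surviving non-zero entries form the candidate discriminative subnetwork.

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward 0 by t, clamping to 0."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def sparsify(weights, t):
    """Apply soft-thresholding to every entry of a vectorized connectome.
    Entries smaller than t in magnitude are zeroed out entirely."""
    return [soft_threshold(w, t) for w in weights]
```

Applied to a weighted connectivity vector, only the connections whose weights survive the threshold remain, which is what makes the selected subnetwork small enough to interpret.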

    Quality Assessment of Retinal Fundus Images using Elliptical Local Vessel Density

    Get PDF
    Diabetic retinopathy (DR) is the leading cause of blindness in the Western world. The World Health Organisation estimates that 135 million people have diabetes mellitus worldwide and that the number of people with diabetes will increase to 300 million by the year 2025 (Amos et al., 1997). Timely detection and treatment of DR prevents severe visual loss in more than 50% of patients (ETDRS, 1991). Through computer simulations it is possible to demonstrate that prevention and treatment are relatively inexpensive compared to the health care and rehabilitation costs incurred by visual loss or blindness (Javitt et al., 1994). The shortage of ophthalmologists and the continuous increase of the diabetic population limit the screening capability of typical manual methods for effective timing of sight-saving treatment. Therefore, an automatic or semi-automatic system able to detect various types of retinopathy is a vital necessity to save many sight-years in the population. According to Luzio et al. (2004), the preferred way to detect diseases such as diabetic retinopathy is digital fundus camera imaging. This allows the image to be enhanced, stored and retrieved more easily than film. In addition, images may be transferred electronically to other sites where a retinal specialist or an automated system can detect or diagnose disease while the patient remains at a remote location. Various systems for automatic or semi-automatic detection of retinopathy from fundus images have been developed. The results obtained are promising, but the initial image quality is a limiting factor (Patton et al., 2006); this is especially true if the machine operator is not a trained photographer. Algorithms to correct the illumination or increase the vessel contrast exist (Chen & Tian, 2008; Foracchia et al., 2005; Grisan et al., 2006; Wang et al., 2001), but they cannot restore an image beyond a certain level of quality degradation.
On the other hand, an accurate quality assessment algorithm allows operators to avoid poor images by simply re-taking the fundus image, eliminating the need for correction algorithms. In addition, a quality metric would permit the automatic submission of only the best images when many are available. The measurement of a precise image quality index is not a straightforward task, mainly because quality is a subjective concept that varies even between experts, especially for images in the middle of the quality scale. In addition, image quality depends on the type of diagnosis being made. For example, an image with dark regions might be considered of good quality for detecting glaucoma but of bad quality for detecting diabetic retinopathy. For this reason, we decided to define quality as the 'characteristics of an image that allow the retinopathy diagnosis by a human or software expert'. Fig. 1 shows some examples of macula-centred fundus images whose quality is very likely to be judged as poor by many ophthalmologists. The reasons for this vary: they can be related to the camera settings, such as exposure or focal plane error (Fig. 1(a,e,f)); the camera condition, such as a dirty or shuttered lens (Fig. 1(d,h)); movements of the patient, which might blur the image (Fig. 1(c)); or the patient's retina not being in the field of view of the camera (Fig. 1(g)). We define an outlier as any image that is not a retina image and that could be submitted to the screening system by mistake. Existing algorithms to estimate image quality are based on the length of visible vessels in the macula region (Fleming et al., 2006), or on edges and luminosity with respect to a reference image (Lalonde et al., 2001; Lee & Wang, 1999). Another method uses an unsupervised classifier that employs multi-scale filterbank responses (Niemeijer et al., 2006).
The shortcomings of these methods are either that they do not take into account the natural variance encountered in retinal images or that they require considerable time to produce a result. Additionally, none of the algorithms in the literature that we surveyed generates a 'quality measure'. Authors tend instead to split the quality scale into distinct classes and to assign images to them. This approach is inflexible and error prone: human experts are likely to disagree if many categories of image quality are used. Therefore, we think that a normalized 'quality measure' from 0 to 1 is the ideal way to approach the classification problem. Processing speed is another aspect to be taken into consideration. While algorithms to assess the disease state of the retina do not need to be particularly fast (within reason), the time response of the quality evaluation method is key to the development of an automatic retinopathy screening system.
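A normalized 0-to-1 quality measure of the kind advocated above can be obtained by passing a weighted combination of image features through a sigmoid. The sketch below is a hypothetical illustration; the feature names and weights are assumptions, not the actual elliptical vessel-density features of the chapter.

```python
import math

def quality_score(features, weights, bias=0.0):
    """Map a weighted sum of image features to a normalized score in (0, 1).
    Hypothetical combination: the real features would be vessel-density and
    colour statistics, which are not reproduced here."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid bounds the score

# Higher feature evidence monotonically raises the score toward 1.
```

A fixed threshold on such a score (for example, flagging images below 0.5 for re-take) then gives camera operators an immediate keep-or-retake decision.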

    Novel autosegmentation spatial similarity metrics capture the time required to correct segmentations better than traditional metrics in a thoracic cavity segmentation workflow

    Get PDF
    Automated segmentation templates can save clinicians time compared to de novo segmentation but may still take substantial time to review and correct. It has not been thoroughly investigated which spatial similarity metrics between automated and corrected segmentations best predict clinician correction time. Bilateral thoracic cavity volumes in 329 CT scans were segmented by a UNet-inspired deep learning segmentation tool and subsequently corrected by a fourth-year medical student. Eight spatial similarity metrics were calculated between the automated and corrected segmentations and associated with correction times using Spearman's rank correlation coefficients. Nine clinical variables were also associated with metrics and correction times using Spearman's rank correlation coefficients or Mann-Whitney U tests. The added path length, false negative path length, and surface Dice similarity coefficient correlated better with correction time than traditional metrics, including the popular volumetric Dice similarity coefficient (respectively ρ = 0.69, ρ = 0.65, ρ = -0.48 versus ρ = -0.25; correlation p values < 0.001). Clinical variables poorly represented in the autosegmentation tool's training data were often associated with decreased accuracy but not necessarily with prolonged correction time. Metrics used to develop and evaluate autosegmentation tools should correlate with clinical time saved. To our knowledge, this is only the second investigation of which metrics correlate with time saved. Validation of our findings is indicated in other anatomic sites and clinical workflows. Novel spatial similarity metrics may be preferable to traditional metrics for developing and evaluating autosegmentation tools that are intended to save clinicians time.
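Two of the quantities in this abstract can be stated concretely. The sketch below is a plain-Python illustration, not the study's code: the volumetric Dice similarity coefficient on voxel sets, and Spearman's ρ between a metric and correction times (assuming no tied values, so ranks are a simple permutation).

```python
def volumetric_dice(a, b):
    """Dice similarity coefficient of two voxel sets: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def spearman_rho(x, y):
    """Spearman rank correlation (no ties assumed): Pearson correlation of
    the rank vectors of x and y."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n - 1) / 2  # mean rank is the same for both vectors
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)  # equal for rx and ry (permutations)
    return cov / var
```

The study's point is visible in this framing: a high volumetric Dice can coexist with a long correction time, which is why rank correlation against measured correction times is the relevant check.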

    Dental CLAIRES: Contrastive LAnguage Image REtrieval Search for Dental Research

    Full text link
    Learning about diagnostic features and related clinical information from dental radiographs is important for dental research. However, the lack of expert-annotated data and convenient search tools poses challenges. Our primary objective is to design a search tool that answers a user's free-text query for oral-health research. The proposed framework, Contrastive LAnguage Image REtrieval Search for dental research (Dental CLAIRES), utilizes periapical radiographs and associated clinical details, such as periodontal diagnosis and demographic information, to retrieve the best-matched images for a text query. We applied a contrastive representation learning method that finds the images described by the user's text by maximizing the similarity score of positive pairs (true pairs) and minimizing the score of negative pairs (random pairs). Our model achieved a hit@3 ratio of 96% and a Mean Reciprocal Rank (MRR) of 0.82. We also designed a graphical user interface that allows researchers to verify the model's performance interactively.
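The two retrieval metrics reported, hit@3 and MRR, are standard and easy to sketch. The helpers below are plain-Python illustrations of their definitions, not the paper's evaluation code.

```python
def hit_at_k(ranked_ids, target_id, k=3):
    """1 if the correct image appears among the top-k retrieved results."""
    return 1 if target_id in ranked_ids[:k] else 0

def mean_reciprocal_rank(queries):
    """Mean of 1/rank of the first correct result over all queries;
    a query whose target is absent contributes 0."""
    total = 0.0
    for ranked_ids, target_id in queries:
        if target_id in ranked_ids:
            total += 1.0 / (ranked_ids.index(target_id) + 1)
    return total / len(queries)
```

An MRR of 0.82 therefore means the correct radiograph sits, on average, at roughly the top of the ranked list; hit@3 of 96% means it is almost always within the first three results.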

    Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    Get PDF
    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high-resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button engages well-defined brain areas that are known to be affected by motor-compromised conditions. In this study, we demonstrate that daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, which our classifier detects with an Area Under the ROC Curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression to psychomotor impairment (p < 0.001) regardless of the content and language of the text typed, and perform consistently across different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders or intoxication at negligible cost in the general population.
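The basic quantity underlying these features, the key-hold time, is simply the interval between a key's press and release events. A minimal sketch follows; the summary statistics chosen here are an illustrative assumption, not the study's published feature set.

```python
from statistics import mean, stdev

def key_hold_times(events):
    """Hold time of each keystroke from (press_time, release_time) pairs,
    in seconds."""
    return [release - press for press, release in events]

def hold_features(events):
    """Summary features of the hold-time distribution. Mean and standard
    deviation are an illustrative choice, not the study's feature set."""
    holds = key_hold_times(events)
    return {"mean_hold": mean(holds), "sd_hold": stdev(holds)}
```

Because only press and release timestamps are needed, features of this kind can in principle be logged during natural typing without recording which keys were pressed, consistent with the content- and language-independence reported above.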

    Toward a Multimodal Computer-Aided Diagnostic Tool for Alzheimer’s Disease Conversion

    Full text link
    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and one of the leading sources of morbidity and mortality in the aging population. AD’s cardinal symptoms include memory and executive function impairment that profoundly alters a patient’s ability to perform activities of daily living. People with mild cognitive impairment (MCI) exhibit many of the early clinical symptoms of patients with AD and have a high chance of converting to AD in their lifetime. Diagnostic criteria rely on clinical assessment and brain magnetic resonance imaging (MRI). Many groups are working to automate this process to improve the clinical workflow. Current computational approaches focus on predicting whether or not a subject with MCI will convert to AD in the future. To our knowledge, limited attention has been given to the development of automated computer-assisted diagnosis (CAD) systems able to provide an AD conversion diagnosis in MCI patient cohorts followed longitudinally. This is important, as these CAD systems could be used by primary care providers to monitor patients with MCI. The method outlined in this paper addresses this gap with a computationally efficient preprocessing and prediction pipeline designed to recognize patterns associated with AD conversion. We propose a new approach that leverages longitudinal data that can be easily acquired in a clinical setting (e.g., T1-weighted magnetic resonance images, cognitive tests, and demographic information) to identify the AD conversion point in MCI subjects with AUC = 84.7. In contrast, cognitive tests and demographics alone achieved AUC = 80.6, a statistically significant difference (n = 669, p < 0.05). We designed a convolutional neural network that is computationally efficient and requires only linear registration between imaging time points.
The model architecture combines Attention and Inception architectures while utilizing both cross-sectional and longitudinal imaging and clinical information. Additionally, we investigated the top brain regions and clinical features that drove the model’s decision; these included the thalamus, caudate, planum temporale, and the Rey Auditory Verbal Learning Test. We believe our method could be easily translated into the healthcare setting as an objective AD diagnostic tool for patients with MCI.

    Longitudinal Connectomes as a Candidate Progression Marker for Prodromal Parkinson’s Disease

    Get PDF
    Parkinson’s disease is the second most prevalent neurodegenerative disorder in the Western world. It is estimated that the neuronal loss related to Parkinson’s disease precedes the clinical diagnosis by more than 10 years (the prodromal phase), leading to a subtle decline that translates into non-specific clinical signs and symptoms. By leveraging brain diffusion magnetic resonance imaging (MRI) data evaluated longitudinally, at two or more time points, we have the opportunity to detect and measure brain changes early in the neurodegenerative process, thereby allowing early detection and monitoring that can enable the development and testing of disease-modifying therapies. In this study, we defined a longitudinal degenerative Parkinson’s disease progression pattern using diffusion MRI connectivity information. This pattern was discovered using a de novo early Parkinson’s disease cohort (n = 21) and a cohort of Controls (n = 30), and was then tested on a cohort at high risk of being in the Parkinson’s disease prodromal phase (n = 16). The progression pattern was numerically quantified with a longitudinal brain connectome progression score, generated by an interpretable machine learning (ML) algorithm trained, with cross-validation, on the longitudinal connectivity information of the Parkinson’s disease and Control groups computed on a nigrostriatal pathway-specific parcellation atlas. Experiments indicated that the longitudinal brain connectome progression score discriminated the progression of the Parkinson’s disease and Control groups with an area under the receiver operating characteristic curve of 0.89 [confidence interval (CI): 0.81–0.96] and the progression of the High Risk Prodromal and Control groups with an area under the curve of 0.76 [CI: 0.66–0.92].
In these same subjects, common motor and cognitive clinical scores used in Parkinson’s disease research showed little or no discriminative ability when evaluated longitudinally. These results suggest that it is possible to quantify neurodegenerative patterns of progression in the prodromal phase with longitudinal diffusion MRI connectivity data and to use these image-based patterns as progression markers for neurodegeneration.
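The discrimination figures quoted in this abstract are areas under the ROC curve, which can be computed directly from per-subject scores via the rank-sum (Mann-Whitney U) identity: the AUC is the probability that a randomly chosen positive subject scores above a randomly chosen negative one. The sketch below is a plain-Python illustration, not the study's evaluation code.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U identity: fraction of (positive, negative)
    pairs where the positive outscores the negative; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

With progression scores for the disease and Control groups as `scores` and group membership as `labels`, this pairwise count reproduces the reported kind of AUC; a value of 0.89 means a disease subject's score exceeds a Control's in 89% of such pairs.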

    A Health Insurance Portability and Accountability Act–Compliant Ocular Telehealth Network for the Remote Diagnosis and Management of Diabetic Retinopathy

    Get PDF
    In this article, we present the design and implementation of a regional ocular telehealth network for remote assessment and management of diabetic retinopathy (DR), including the design requirements, network topology, protocol design, system workflow, graphical user interfaces, and performance evaluation. The Telemedical Retinal Image Analysis and Diagnosis Network is a computer-aided, image-analysis telehealth paradigm for the diagnosis of DR and other retinal diseases using fundus images acquired by primary care end users delivering care to underserved patient populations in the mid-South and southeastern United States.

    Fundus Image Analysis for the Diagnosis of Diabetic Retinopathy

    No full text
    In this Ph.D. thesis, we study new methods to analyse digital fundus images of diabetic patients. In particular, we concentrate on the development of the algorithmic components of an automatic screening system for diabetic retinopathy. The techniques developed fall into three categories: quality assessment and improvement, lesion segmentation, and diagnosis. For the first category, we present a fast algorithm to numerically estimate the quality of a single image by employing vasculature- and colour-based features; additionally, we show how it is possible to increase image quality and remove reflection artefacts by merging information gathered from multiple fundus images (captured while changing the patient's point of gaze). For the second category, two families of lesions are targeted: exudates and microaneurysms; two new algorithms that work on single fundus images are proposed and compared with existing techniques to prove their efficacy; in the microaneurysm case, a new Radon transform-based operator was developed. In the last category, diagnosis, we developed an algorithm that diagnoses diabetic retinopathy and diabetic macular edema based on the segmented lesions; starting from a single unseen image, our algorithm can generate a diabetic retinopathy and macular edema diagnosis in ~22 seconds on a 1.6 GHz machine with 4 GB of RAM. Additionally, we show the first results of a macular edema detection algorithm based on multiple fundus images, which can potentially identify swelling of the macula even when no lesions are visible.
